hysop.core.mpi.redistribute module¶
Implementation of data transfer/redistribution between topologies.
See hysop.operator.redistribute.Redistribute for automatic redistribute deployment.
- RedistributeOperatorBase: abstract base class for redistribute operators
- RedistributeIntra: for topologies/operators defined inside the same MPI communicator
- RedistributeInter: for topologies/operators defined on two different MPI communicators
- RedistributeOverlap: for topologies defined inside the same MPI parent communicator but with a different number of processes
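The three concrete classes share the can_redistribute() classmethod documented below, so a generic driver can probe them in order. The following minimal sketch (assuming source_topo and target_topo are two pre-built topology objects, whose construction is omitted) illustrates the kind of dispatch that hysop.operator.redistribute.Redistribute automates:

    from hysop.core.mpi.redistribute import (
        RedistributeIntra, RedistributeInter, RedistributeOverlap)

    def select_redistribute(source_topo, target_topo, **kwds):
        # Probe each concrete implementation and return the first one that
        # declares itself able to handle this source/target topology pair.
        for cls in (RedistributeIntra, RedistributeInter, RedistributeOverlap):
            if cls.can_redistribute(source_topo, target_topo, **kwds):
                return cls
        raise RuntimeError('No redistribute operator can handle these topologies.')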
- class hysop.core.mpi.redistribute.RedistributeInter(other_task_id=None, **kwds)[source]¶
Bases: RedistributeOperatorBase
Data transfer between two operators/topologies. Source and target must be defined on two different MPI communicators.
- apply(**kwds)¶
Abstract method that should be implemented. Applies this node (operator, computational graph operator…).
- classmethod can_redistribute(source_topo, target_topo, other_task_id=None, **kwds)[source]¶
Return True if this RedistributeOperatorBase can be applied to redistribute a variable from source_topo to target_topo; return False otherwise.
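A hedged construction sketch: other_task_id belongs to the documented signature, while the variable, source_topo and target_topo keyword names are assumptions (they are forwarded through **kwds and are not spelled out on this page); field, the two topologies and task_id are taken as pre-built objects:

    from hysop.core.mpi.redistribute import RedistributeInter

    # Guard construction with the classmethod above.
    if RedistributeInter.can_redistribute(source_topo, target_topo,
                                          other_task_id=task_id):
        op = RedistributeInter(variable=field,          # kwarg names assumed,
                               source_topo=source_topo,  # not documented here
                               target_topo=target_topo,
                               other_task_id=task_id)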
- discretize()[source]¶
By default, an operator discretizes all of its variables. For each input continuous field that is also an output field, the input topology may differ from the output topology.
After this call, one can access self.input_discrete_fields and self.output_discrete_fields, which contain the input and output discretized fields mapped by continuous field.
self.discrete_fields will be a tuple containing all input and output discrete fields.
Discrete tensor fields are rebuilt from the discretized scalar fields and are accessible from self.input_tensor_fields, self.output_tensor_fields and self.discrete_tensor_fields, like their scalar counterparts.
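A short usage sketch of the attributes described above, assuming op is an already-built redistribute operator and field one of its continuous fields:

    op.discretize()

    din = op.input_discrete_fields[field]    # discrete view, keyed by the
    dout = op.output_discrete_fields[field]  # continuous field it discretizes
    all_discrete = op.discrete_fields        # tuple of all discrete fields
    tensors = op.discrete_tensor_fields      # tensor fields rebuilt from scalars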
- get_field_requirements()[source]¶
Called just after handle_method(), i.e. once self.method has been set. Field requirements are:
- the required local and global transposition state, if any
- the required memory ordering (either C or Fortran)
The default for each input and output variable is Backend.HOST, no minimum or maximum ghosts, MemoryOrdering.ANY and no specific transposition state.
- get_preserved_input_fields()[source]¶
This inter-communicator redistribute preserves its output fields: output fields are invalidated on other topologies only if the field is not also an input.
- output_topology_state(output_field, input_topology_states)[source]¶
Determine a specific output discrete topology state given all input discrete topology states.
Must be redefined to help correct computational graph generation. By default, the first input state is returned, provided all input states are identical.
If the input_topology_states differ, the default behaviour is to raise a RuntimeError. Operators altering the state of their outputs have to override this method.
The state may include the transposition state, memory order and more. See hysop.topology.transposition_state.TranspositionState for the complete list.
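The default behaviour described above, restated as a standalone Python sketch (the exact container type of input_topology_states is not specified on this page; any non-empty iterable of state objects is assumed):

    def default_output_topology_state(output_field, input_topology_states):
        states = tuple(input_topology_states)
        # All input states agree: forward that common state.
        if all(s == states[0] for s in states[1:]):
            return states[0]
        # Otherwise the operator must override output_topology_state().
        raise RuntimeError('Input topology states differ for {}.'.format(output_field))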
- class hysop.core.mpi.redistribute.RedistributeInterParam(parameter, source_topo, target_topo, other_task_id, domain, **kwds)[source]¶
Bases: ComputationalGraphOperator
Parameter transfer between two operators/topologies, communicating parameters across tasks. Source and target must:
- be MPIParams defined on different communicators
Parameters¶
- parameter: tuple of ScalarParameter or TensorParameter, the parameters to communicate
- source_topo: MPIParams
- target_topo: MPIParams
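A construction sketch following the documented signature above; the dt parameter, the two MPIParams objects, the domain and the task id are assumed to have been built elsewhere:

    from hysop.core.mpi.redistribute import RedistributeInterParam

    op = RedistributeInterParam(parameter=(dt,),             # tuple of Scalar/TensorParameter
                                source_topo=src_mpi_params,  # MPIParams on the source side
                                target_topo=dst_mpi_params,  # MPIParams on the target side
                                other_task_id=task_id,
                                domain=domain)
    op.apply()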
- apply(**kwds)¶
Abstract method that should be implemented. Applies this node (operator, computational graph operator…).
- class hysop.core.mpi.redistribute.RedistributeIntra(**kwds)[source]¶
Bases: RedistributeOperatorBase
Data transfer between two operators/topologies defined on the same communicator.
Source and target must (see the sketch after this list):
- be defined on the same communicator
- work on the same number of MPI processes
- work with the same global resolution
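These constraints are exactly what the can_redistribute() classmethod documented below verifies, so a caller can test a topology pair without comparing communicators or resolutions by hand. A minimal sketch, assuming topo_a and topo_b are two pre-built topologies:

    from hysop.core.mpi.redistribute import RedistributeIntra

    # True only if topo_a and topo_b share the communicator, the number of
    # MPI processes and the global resolution, per the list above.
    ok = RedistributeIntra.can_redistribute(topo_a, topo_b)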
- apply(**kwds)¶
Abstract method that should be implemented. Applies this node (operator, computational graph operator…).
- classmethod can_redistribute(source_topo, target_topo, **kwds)[source]¶
Return True if this RedistributeOperatorBase can be applied to redistribute a variable from source_topo to target_topo; return False otherwise.
- discretize()[source]¶
By default, an operator discretizes all of its variables. For each input continuous field that is also an output field, the input topology may differ from the output topology.
After this call, one can access self.input_discrete_fields and self.output_discrete_fields, which contain the input and output discretized fields mapped by continuous field.
self.discrete_fields will be a tuple containing all input and output discrete fields.
Discrete tensor fields are rebuilt from the discretized scalar fields and are accessible from self.input_tensor_fields, self.output_tensor_fields and self.discrete_tensor_fields, like their scalar counterparts.